Reconstructing the Dynamic Directivity of Unconstrained Speech
This article presents a method for estimating and reconstructing the spatial
energy distribution pattern of natural speech, which is crucial for achieving
realistic vocal presence in virtual communication settings. The method
comprises two stages. First, recordings of speech captured by a real, static
microphone array are used to create an egocentric virtual array that tracks the
movement of the speaker over time. This virtual array is used to measure and
encode the high-resolution directivity pattern of the speech signal as it
evolves dynamically with natural speech and movement. In the second stage, the
encoded directivity representation is utilized to train a machine learning
model that can estimate the full, dynamic directivity pattern given a limited
set of speech signals, such as those recorded using the microphones on a
head-mounted display. Our results show that neural networks can accurately
estimate the full directivity pattern of natural, unconstrained speech from
limited information. The proposed method for estimating and reconstructing the
spatial energy distribution pattern of natural speech, along with the
evaluation of various machine learning models and training paradigms, provides
an important contribution to the development of realistic vocal presence in
virtual communication settings.
Comment: In proceedings of I3DA 2023 - The 2023 International Conference on Immersive and 3D Audio. DOI coming soon.
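To make the encoding stage concrete, here is a minimal Python sketch of fitting a spherical-harmonic representation to per-direction speech energies measured by such a virtual array. The function names, the SH order, and the assumption that microphone directions have already been rotated into the speaker's head frame are illustrative, not the paper's implementation.

```python
# Minimal sketch: encode one frame of speech directivity as spherical-harmonic
# coefficients. Assumes per-microphone short-time energies and microphone
# directions already expressed in the speaker's head frame (the "egocentric
# virtual array"); array geometry, tracking, and STFT are out of scope here.
import numpy as np
from scipy.special import sph_harm

def sh_basis(order, az, el):
    """Real spherical-harmonic basis evaluated at (azimuth, elevation)."""
    cols = []
    for n in range(order + 1):
        for m in range(-n, n + 1):
            # scipy uses colatitude, hence pi/2 - elevation.
            y = sph_harm(abs(m), n, az, np.pi / 2 - el)
            if m < 0:
                cols.append(np.sqrt(2) * y.imag)
            elif m == 0:
                cols.append(y.real)
            else:
                cols.append(np.sqrt(2) * y.real)
    return np.stack(cols, axis=-1)           # (num_dirs, (order+1)^2)

def encode_directivity(energies_db, az, el, order=4):
    """Least-squares fit of SH coefficients to per-direction energies."""
    Y = sh_basis(order, az, el)
    coeffs, *_ = np.linalg.lstsq(Y, energies_db, rcond=None)
    return coeffs                             # one frame of the encoded pattern
```

Repeating the fit frame by frame yields the time-varying coefficient sequence that a downstream model could be trained to predict from a sparse set of microphones.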
Self-Supervised Representation Learning for Vocal Music Context
In music and speech, meaning is derived at multiple levels of context.
Affect, for example, can be inferred both by a short sound token and by sonic
patterns over a longer temporal window such as an entire recording. In this
paper we focus on inferring meaning from this dichotomy of contexts. We show
how contextual representations of short sung vocal lines can be implicitly
learned from fundamental frequency (f0) contours and thus be used as a meaningful
feature space for downstream Music Information Retrieval (MIR) tasks. We
propose three self-supervised deep learning paradigms which leverage pseudotask
learning of these two levels of context to produce latent representation
spaces. We evaluate the usefulness of these representations by embedding unseen
vocal contours into each space and conducting downstream classification tasks.
Our results show that contextual representation can enhance downstream
classification by as much as 15% compared to using traditional statistical
contour features.
Comment: Working on a more up-to-date version.
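As an illustration of how such a pseudo-task might be set up, the sketch below trains a small contour encoder with a generic contrastive objective in which two f0 contours drawn from the same recording act as a positive pair. The architecture, loss, and batch layout are assumptions for illustration only; the paper's three specific paradigms are not reproduced here.

```python
# Generic sketch of one possible pseudo-task: contours sampled from the same
# recording are pulled together in latent space (long-range context), contours
# from different recordings are pushed apart.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContourEncoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, 5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # pool over time
        )
        self.proj = nn.Linear(64, dim)

    def forward(self, f0):                    # f0: (batch, 1, time)
        h = self.net(f0).squeeze(-1)          # (batch, 64)
        return F.normalize(self.proj(h), dim=-1)

def contrastive_loss(za, zb, temperature=0.1):
    """InfoNCE over paired contour views from the same recording."""
    logits = za @ zb.t() / temperature        # (batch, batch) similarities
    labels = torch.arange(za.size(0))         # matched pairs on the diagonal
    return F.cross_entropy(logits, labels)
```

After training, the encoder's outputs serve as the latent feature space into which unseen vocal contours are embedded for downstream MIR classification.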
Acoustically-Driven Phoneme Removal That Preserves Vocal Affect Cues
In this paper, we propose a method for removing linguistic information from
speech for the purpose of isolating paralinguistic indicators of affect. The
immediate utility of this method lies in clinical tests of sensitivity to vocal
affect that are not confounded by language processing, which is impaired in a
variety of clinical populations. The method is based on simultaneous recordings of speech
audio and electroglottographic (EGG) signals. The speech audio signal is used
to estimate the average vocal tract filter response and amplitude envelope. The
EGG signal supplies a direct correlate of voice source activity that is mostly
independent of phonetic articulation. These signals are used to create a third
signal designed to capture as much paralinguistic information from the vocal
production system as possible -- maximizing the retention of bioacoustic cues
to affect -- while eliminating phonetic cues to verbal meaning. To evaluate the
success of this method, we studied the perception of corresponding speech audio
and transformed EGG signals in an affect rating experiment with online
listeners. The results show a high degree of similarity in the perceived affect
of matched signals, indicating that our method is effective.
Comment: Submitted to the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing.
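A rough sketch of the signal construction described above, assuming time-aligned audio and EGG recordings at the same sample rate. The LPC order, the envelope smoothing, and the differentiated-EGG source are illustrative choices, not the authors' exact processing chain.

```python
# Sketch: impose an utterance-level average vocal tract filter and the speech
# amplitude envelope (both estimated from the audio) onto an EGG-derived
# source signal, removing frame-to-frame phonetic detail.
import numpy as np
import librosa
from scipy.signal import lfilter, hilbert

def transform_egg(audio, egg, sr, lpc_order=16):
    # Average vocal tract filter: a single LPC fit over the whole utterance,
    # so no time-varying articulation survives.
    a = librosa.lpc(audio, order=lpc_order)

    # Slowly varying amplitude envelope of the speech audio (20 ms smoothing).
    win = int(0.02 * sr)
    env = np.convolve(np.abs(hilbert(audio)), np.ones(win) / win, mode="same")

    # Crude glottal-flow correlate from the EGG signal.
    source = np.diff(egg, prepend=egg[0])

    # Drive the average filter with the EGG source, then impose the envelope.
    out = lfilter([1.0], a, source)
    return out / (np.abs(out).max() + 1e-9) * env
```

The output carries voice-source and amplitude cues to affect while the averaged filter suppresses phonetic identity.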
A cluster analysis of harmony in the McGill Billboard dataset
We set out to perform a cluster analysis of harmonic structures (specifically, chord-to-chord transitions) in the McGill Billboard dataset, to determine whether there is evidence of multiple harmonic grammars and practices in the corpus and, if so, what the optimal division of songs according to those harmonic grammars is. We define optimal as providing meaningful, specific information about the harmonic practices of songs in the cluster, while remaining general enough to serve as a guide to songwriting and predictive listening. We test two hypotheses in our cluster analysis: first, that 5–9 clusters would be optimal, based on the work of Walter Everett (2004), and second, that 15 clusters would be optimal, based on a set of user-generated genre tags reported by Hendrik Schreiber (2015). We subjected the harmonic structures for each song in the corpus to a K-means cluster analysis. We conclude that the optimal clustering solution is likely to be within the 5–8 cluster range. We also propose that a map of cluster types emerging as the number of clusters increases from one to eight constitutes a greater aid to our understanding of how various harmonic practices, styles, and sub-styles comprise the McGill Billboard dataset.
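As a sketch of this style of analysis, the code below flattens each song's chord-to-chord transition matrix into a feature vector and runs K-means over a range of cluster counts. The chord vocabulary, the normalization, and model selection by silhouette score are simplifying assumptions, not the study's exact pipeline.

```python
# Sketch: cluster songs by their chord-transition profiles with K-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def transition_features(chords, vocab):
    """Flattened chord-to-chord transition probability matrix for one song."""
    idx = {c: i for i, c in enumerate(vocab)}
    T = np.zeros((len(vocab), len(vocab)))
    for a, b in zip(chords[:-1], chords[1:]):
        T[idx[a], idx[b]] += 1
    T /= max(T.sum(), 1)                       # counts -> probabilities
    return T.ravel()

def cluster_songs(song_chord_lists, vocab, k_range=range(2, 16)):
    X = np.stack([transition_features(c, vocab) for c in song_chord_lists])
    results = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        results[k] = (labels, silhouette_score(X, labels))
    return results                             # compare quality across k
```

Inspecting how cluster memberships split and merge as k grows is one way to build the kind of cluster-type map the abstract proposes.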
HEAR: Holistic Evaluation of Audio Representations
What audio embedding approach generalizes best to a wide range of downstream
tasks across a variety of everyday domains without fine-tuning? The aim of the
HEAR benchmark is to develop a general-purpose audio representation that
provides a strong basis for learning in a wide variety of tasks and scenarios.
HEAR evaluates audio representations using a benchmark suite across a variety
of domains, including speech, environmental sound, and music. HEAR was launched
as a NeurIPS 2021 shared challenge. In the spirit of shared exchange, each
participant submitted an audio embedding model following a common API that is
general-purpose, open-source, and freely available to use. Twenty-nine models
by thirteen external teams were evaluated on nineteen diverse downstream tasks
derived from sixteen datasets. Open evaluation code, submitted models and
datasets are key contributions, enabling comprehensive and reproducible
evaluation, as well as previously impossible longitudinal studies. It still
remains an open question whether one single general-purpose audio
representation can perform as holistically as the human ear.
Comment: To appear in Proceedings of Machine Learning Research (PMLR): NeurIPS 2021 Competition Track.
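For readers curious what the common API looks like, here is a minimal module in the shape of the published HEAR 2021 API (load_model, get_timestamp_embeddings, get_scene_embeddings), with a trivial random-projection embedding standing in for a real model; consult the official HEAR documentation for the authoritative specification.

```python
# Sketch of a HEAR-style embedding module. The three entry points and the
# sample_rate / embedding-size attributes follow the published API; the
# embedding itself is a placeholder, not a usable model.
import torch

class TrivialModel(torch.nn.Module):
    sample_rate = 16000                       # expected input sample rate
    scene_embedding_size = 128
    timestamp_embedding_size = 128

def load_model(model_file_path: str = "") -> torch.nn.Module:
    return TrivialModel()

def get_timestamp_embeddings(audio: torch.Tensor, model: torch.nn.Module):
    # audio: (n_sounds, n_samples). One frame every 50 ms; a fixed random
    # projection stands in for a learned embedding.
    hop = int(0.05 * model.sample_rate)
    frames = audio.unfold(1, hop, hop)                    # (n, t, hop)
    weight = torch.randn(hop, model.timestamp_embedding_size)
    emb = frames @ weight                                 # (n, t, d)
    ts = (torch.arange(emb.shape[1]) * 50.0 + 25.0).expand(emb.shape[0], -1)
    return emb, ts                                        # timestamps in ms

def get_scene_embeddings(audio: torch.Tensor, model: torch.nn.Module):
    emb, _ = get_timestamp_embeddings(audio, model)
    return emb.mean(dim=1)                                # (n, d)
```

Any module exposing this interface can, in principle, be dropped into the open evaluation code and scored on all nineteen downstream tasks.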